3,299 research outputs found

    A study of QoS support for real time multimedia communication over IEEE802.11 WLAN : a thesis presented in partial fulfillment of the requirements for the degree of Master of Engineering in Computer Systems Engineering, Massey University, Albany, New Zealand

    Quality of Service (QoS) is becoming a key problem for Real Time (RT) traffic transmitted over Wireless Local Area Networks (WLANs). In this project the recent proposals for enhanced QoS performance for RT multimedia are evaluated and analysed. Two simulation models, for the EDCF and HCF protocols, are explored using the OPNET and NS-2 simulation packages respectively. From the simulation results we have studied the limitations of the 802.11e standard for RT multimedia communication, analysed the reasons for these limitations, and proposed solutions for improvement. Since RT multimedia communication involves time-sensitive traffic, the measures of quality of service are generally minimal delay (latency) and delay variation (jitter). The 802.11 WLAN standard focuses on the PHY layer and the MAC layer. The PHY-layer data rates are increased in the 802.11b, a, g, j, and n standards by different code-mapping technologies, while 802.11e is developed specifically for the QoS performance of RT traffic at the MAC layer. Enhancing the MAC-layer protocols is therefore a significant topic for guaranteeing the QoS performance of RT traffic. The original MAC protocols of 802.11 are DCF (Distributed Coordination Function) and PCF (Point Coordination Function). They cannot achieve the required QoS performance for RT-traffic transmission. The IEEE 802.11e draft has developed EDCF and HCF instead. Simulation results of the EDCF and HCF models that we explored with OPNET and NS-2 show that minimal latency and jitter can be achieved. However, the limitations of EDCF and HCF are also identified from the simulation results. EDCF is not stable under high network loading. Channel utilization is low for both protocols. Furthermore, the fairness index of HCF is very poor, which means that low-priority traffic can starve in the WLAN. All these limitations are due to the priority mechanism of the protocols. 
As practical research directions, we propose future work to develop a dynamic, self-adaptive 802.11e protocol. Because of the instability of EDCF under heavy loading, parameters can be added to track the traffic load and channel condition efficiently. We provide indications for adding such parameters to increase EDCF performance and channel utilization; we have established that channel utilization can be increased and collision time reduced for RT traffic over the EDCF protocol. These parameters can include the loading rate, collision rate, and total throughput saturation; further simulation should look for optimum values for them. Because all the limitations are due to the priority mechanism, the other direction is to do away with the priority rule in favour of reasonable bandwidth allocation. Because of its huge polling-induced overheads, HCF has an unsatisfactory tradeoff, which leads to poor fairness and poor throughput. By developing an enhanced HCF it may be possible to improve the polling interval and the TXOP allocation mechanism to obtain a better fairness index and channel utilization. From the simulation we noticed that the traffic deployment can affect the total QoS performance, an indication to explore whether classifying traffic deployments into different categories is a good idea. With different load-based traffic categories, QoS may be enhanced by an appropriate bandwidth-allocation strategy
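The starvation finding above is typically quantified with Jain's fairness index, the standard metric for per-flow throughput fairness (the metric is standard networking practice; the thesis does not specify which index it used, so this is an illustrative sketch):

```python
def jain_fairness_index(throughputs):
    """Jain's fairness index: 1.0 means a perfectly fair allocation;
    the worst case, one flow getting everything, approaches 1/n."""
    n = len(throughputs)
    total = sum(throughputs)
    sum_sq = sum(x * x for x in throughputs)
    return (total * total) / (n * sum_sq)

# Equal shares score 1.0; a starved low-priority flow drags the
# index well below 1.0.
fair = jain_fairness_index([1.0, 1.0, 1.0, 1.0])    # -> 1.0
unfair = jain_fairness_index([4.0, 0.1, 0.1, 0.1])  # -> well below 1.0
```

A priority mechanism that lets high-priority RT traffic monopolize TXOPs shows up directly as a low index, which is the symptom reported for HCF.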

    Knowledge Refinement via Rule Selection

    In several different applications, including data transformation and entity resolution, rules are used to capture aspects of knowledge about the application at hand. Often, a large set of such rules is generated automatically or semi-automatically, and the challenge is to refine the encapsulated knowledge by selecting a subset of rules based on the expected operational behavior of the rules on available data. In this paper, we carry out a systematic complexity-theoretic investigation of the following rule selection problem: given a set of rules specified by Horn formulas, and a pair of an input database and an output database, find a subset of the rules that minimizes the total error, that is, the number of false positive and false negative errors arising from the selected rules. We first establish computational hardness results for the decision problems underlying this minimization problem, as well as upper and lower bounds for its approximability. We then investigate a bi-objective optimization version of the rule selection problem in which both the total error and the size of the selected rules are taken into account. We show that testing for membership in the Pareto front of this bi-objective optimization problem is DP-complete. Finally, we show that a similar DP-completeness result holds for a bi-level optimization version of the rule selection problem, where one minimizes first the total error and then the size
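The total-error objective can be made concrete with a toy brute-force selector. The modeling of each rule as a set-valued function on the input database, and all names here, are illustrative assumptions, not the paper's formalism; the exhaustive search is consistent with the hardness results in that it takes exponential time:

```python
from itertools import combinations

def total_error(selected_rules, input_db, output_db):
    """False positives: facts derived by the selected rules but absent
    from the output database. False negatives: output facts that no
    selected rule derives. Each rule is modeled as a function from the
    input database to a set of derived facts."""
    derived = set()
    for rule in selected_rules:
        derived |= rule(input_db)
    return len(derived - output_db) + len(output_db - derived)

def best_subset(rules, input_db, output_db):
    """Exhaustive search over all rule subsets: exponential in the
    number of rules, but fine for tiny instances."""
    best, best_err = (), float("inf")
    for k in range(len(rules) + 1):
        for subset in combinations(rules, k):
            err = total_error(subset, input_db, output_db)
            if err < best_err:
                best, best_err = subset, err
    return best, best_err
```

For example, with input database {1, 2, 3}, output database {2, 3, 4}, and rules "add one" and "multiply by ten", the selector keeps only the first rule, which reproduces the output exactly.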

    Generic Multisensor Integration Strategy and Innovative Error Analysis for Integrated Navigation

    A modern multisensor integrated navigation system, as used in most civilian applications, typically consists of GNSS (Global Navigation Satellite System) receivers, IMUs (Inertial Measurement Units), and/or other sensors, e.g., odometers and cameras. With the increasing availability of low-cost sensors, more research and development activities aim to build a cost-effective system without sacrificing navigational performance. The three principal contributions of this dissertation are as follows: i) A multisensor kinematic positioning and navigation system built on the Linux Operating System (OS) with the Real Time Application Interface (RTAI), the York University Multisensor Integrated System (YUMIS), was designed and realized to integrate GNSS receivers, IMUs, and cameras. YUMIS sets a good example of a low-cost yet high-performance multisensor inertial navigation system and lays the groundwork, in a practical and economic way, for personnel training in subsequent academic research. ii) A generic multisensor integration strategy (GMIS) was proposed, which features: a) a core system model developed upon the kinematics of a rigid body; b) all sensor measurements taken as raw measurements in the Kalman filter without differentiation. 
The essential competitive advantages of GMIS over the conventional error-state based strategies are: 1) the influences of the IMU measurement noises on the final navigation solutions are effectively mitigated because of the increased measurement redundancy upon the angular rate and acceleration of a rigid body; 2) The state and measurement vectors in the estimator with GMIS can be easily expanded to fuse multiple inertial sensors and all other types of measurements, e.g., delta positions; 3) one can directly perform error analysis upon both raw sensor data (measurement noise analysis) and virtual zero-mean process noise measurements (process noise analysis) through the corresponding measurement residuals of the individual measurements and the process noise measurements. iii) The a posteriori variance component estimation (VCE) was innovatively accomplished as an advanced analytical tool in the extended Kalman Filter employed by the GMIS, which makes possible the error analysis of the raw IMU measurements for the very first time, together with the individual independent components in the process noise vector
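The residual-based error analysis described above rests on the standard Kalman measurement update, whose innovation term is exactly the per-measurement residual that such analysis inspects. The following is a generic textbook sketch, not the GMIS implementation itself:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update.

    x, P : prior state estimate and its covariance
    z    : raw measurement vector (e.g., an IMU angular rate sample)
    H    : measurement matrix, R : measurement noise covariance

    Returns the updated state, updated covariance, and the innovation
    z - H @ x, which is the measurement residual used for noise
    (variance component) analysis."""
    y = z - H @ x                    # innovation / measurement residual
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new, y
```

Feeding raw sensor samples as `z`, rather than differenced or pre-integrated quantities, is what gives each sensor its own residual stream to analyze.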

    REAL-TIME MODEL PREDICTIVE CONTROL OF QUASI-KEYHOLE PIPE WELDING

    Quasi-keyhole welding, including plasma keyhole and double-sided welding, is a novel approach proposed to operate the keyhole arc welding process. It can result in a high-quality weld, but it also places higher demands on the operator. A computer control system that detects the keyhole and controls the arc current can improve the performance of the welding process. To this end, developing automatic pipe welding, instead of manual welding, is a hot research topic in the welding field. The objective of this research is to design an automatic quasi-keyhole pipe welding system that can monitor the keyhole and control its establishment time to track the reference trajectory as the dynamic behavior of the welding process changes. For this reason, an automatic plasma welding system is proposed in which an additional electrode is added on the back side of the workpiece to detect the keyhole, as well as to provide the double-sided arc in the double-sided arc welding mode. In the automatic pipe welding system the arc current can be controlled by the computer controller. Based on the designed automatic plasma pipe welding system, two kinds of model predictive controller − linear and bilinear − are developed, and an optimization algorithm is designed to optimize the keyhole weld process. The proposed approach has been verified using both linear and bilinear model structures in quasi-keyhole plasma welding (QKPW) process experiments, in both the normal plasma keyhole and double-sided arc welding modes
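The idea of model predictive control of the keyhole establishment time can be illustrated with its simplest special case: a horizon-one controller on a scalar linear model. The model structure and all parameter names below are assumed for illustration; the abstract's actual controllers optimize over a horizon and also use a bilinear model:

```python
def one_step_mpc(a, b, y_prev, ref, u_min, u_max):
    """For the scalar linear model y[k] = a*y[k-1] + b*u[k], where y is
    the keyhole establishment time and u the arc-current adjustment,
    choose the u that drives y onto the reference trajectory, clipped
    to actuator limits. Full MPC minimizes tracking error over a
    multi-step horizon; this is the horizon-1 special case."""
    u = (ref - a * y_prev) / b   # unconstrained minimizer of (y - ref)**2
    return max(u_min, min(u_max, u))
```

When the process dynamics drift (a and b change), the model is re-identified and the same computation keeps the establishment time on the reference, which is the adaptive behavior the abstract targets.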

    FINANCIAL LEVERAGE AND FIRM PERFORMANCE DURING AND AFTER THE FINANCIAL CRISIS

    The objective of this paper is to analyse the relation between a company's leverage and its performance during the financial crisis of 2007-2009. A hypothesis is proposed that leverage would negatively impact abnormal return during the financial crisis. Interestingly, it is found that, at the peak of the crisis, during 2008, firms with higher leverage performed better. The opposite effect is found in 2009, when firms with high leverage under-performed. These results seem somewhat counter-intuitive; however, after taking into account industry effects, the results indicate that leverage had a negative effect on companies' performance during the 2008-2009 period
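The hypothesized effect amounts to a negative regression slope of abnormal return on leverage. A minimal OLS sketch on synthetic data, assuming a simple linear specification (the paper's actual model, controls, and industry-effect adjustment are not specified here):

```python
import numpy as np

def leverage_slope(abnormal_returns, leverage):
    """OLS slope of abnormal return on leverage; a negative slope is
    the hypothesized crisis-period effect."""
    X = np.column_stack([np.ones(len(leverage)), leverage])
    beta, *_ = np.linalg.lstsq(X, abnormal_returns, rcond=None)
    return beta[1]

# Synthetic example: returns fall 0.2 points per unit of leverage.
lev = np.array([0.1, 0.2, 0.3, 0.4])
ret = 0.05 - 0.2 * lev
slope = leverage_slope(ret, lev)   # recovers -0.2
```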

    Domain Adaptive Dialog Generation via Meta Learning

    Domain adaptation is an essential task in dialog system building because so many new dialog tasks are created for different needs every day. Collecting and annotating training data for these new tasks is costly since it involves real user interactions. We propose a domain adaptive dialog generation method based on meta-learning (DAML). DAML is an end-to-end trainable dialog system model that learns from multiple rich-resource tasks and then adapts to new domains with minimal training samples. We train a dialog system model using multiple rich-resource single-domain dialog datasets by applying the model-agnostic meta-learning algorithm to the dialog domain. The model is capable of learning a competitive dialog system on a new domain with only a few training examples in an efficient manner. The two-step gradient updates in DAML enable the model to learn general features across multiple tasks. We evaluate our method on a simulated dialog dataset and achieve state-of-the-art performance, which is generalizable to new tasks. Comment: Accepted as a long paper in ACL 201
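The two-step gradient update mentioned above is the core of model-agnostic meta-learning (MAML): an inner step adapts the parameters to each task, and an outer step differentiates through that adaptation. A toy scalar sketch (the loss, tasks, and learning rates are illustrative, not the DAML dialog model):

```python
def maml_step(theta, tasks, inner_lr=0.1, outer_lr=0.1):
    """One MAML update on a toy scalar model: each task t is a target
    value with per-task loss (theta - t)**2. The inner step adapts
    theta to the task; the outer step updates theta with the gradient
    of the post-adaptation loss, taken through the inner step."""
    meta_grad = 0.0
    for t in tasks:
        grad = 2 * (theta - t)              # inner (task) gradient
        adapted = theta - inner_lr * grad   # task-specific adaptation
        # Chain rule through the inner step: d(adapted)/d(theta)
        # = 1 - 2*inner_lr for this quadratic loss.
        meta_grad += 2 * (adapted - t) * (1 - 2 * inner_lr)
    return theta - outer_lr * meta_grad / len(tasks)
```

With symmetric tasks (targets +1 and -1), theta = 0 is a fixed point: it is the initialization from which one inner step serves every task equally well, which is exactly the kind of adaptable initialization DAML seeks across dialog domains.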

    Learning Audio Sequence Representations for Acoustic Event Classification

    Acoustic Event Classification (AEC) has become a significant task for machines to perceive the surrounding auditory scene. However, extracting effective representations that capture the underlying characteristics of the acoustic events is still challenging. Previous methods mainly focused on designing audio features in a 'hand-crafted' manner. Interestingly, data-learnt features have recently been reported to show better performance. Up to now, these were only considered at the frame level. In this paper, we propose an unsupervised learning framework to learn a vector representation of an audio sequence for AEC. This framework consists of a Recurrent Neural Network (RNN) encoder and an RNN decoder, which respectively transform the variable-length audio sequence into a fixed-length vector and reconstruct the input sequence from the generated vector. After training the encoder-decoder, we feed the audio sequences to the encoder and take the learnt vectors as the audio sequence representations. Compared with previous methods, the proposed method can not only deal with audio streams of arbitrary length, but also learn the salient information of the sequence. Extensive evaluation on a large acoustic event database is performed, and the empirical results demonstrate that the learnt audio sequence representation yields a significant performance improvement over other state-of-the-art hand-crafted sequence features for AEC
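The encoder half of the framework can be sketched as a vanilla recurrent network whose final hidden state is the fixed-length representation. The weights below are untrained placeholders and the cell is a plain tanh RNN, not necessarily the architecture used in the paper; the point is that sequences of any length map to vectors of one fixed dimension:

```python
import numpy as np

def rnn_encode(sequence, W_h, W_x, b):
    """Run a tanh RNN over a variable-length sequence of feature
    frames (shape: [T, d_x]) and return the final hidden state as the
    fixed-length representation of the whole sequence."""
    h = np.zeros(W_h.shape[0])
    for x in sequence:
        h = np.tanh(W_h @ h + W_x @ x + b)
    return h

# Sequences of different lengths yield vectors of the same dimension.
rng = np.random.default_rng(0)
d_h, d_x = 4, 3
W_h = rng.standard_normal((d_h, d_h))
W_x = rng.standard_normal((d_h, d_x))
b = np.zeros(d_h)
v_short = rnn_encode(rng.standard_normal((5, d_x)), W_h, W_x, b)
v_long = rnn_encode(rng.standard_normal((9, d_x)), W_h, W_x, b)
```

In the full framework a decoder RNN is trained to reconstruct the input frames from this vector, which is what forces the vector to retain the salient content of the sequence.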